FMA: Interop Transaction Handling #249

Merged: 5 commits into main from fma-interopTx-filters on Apr 15, 2025

Conversation

axelKingsley (Contributor)

No description provided.

axelKingsley force-pushed the fma-interopTx-filters branch from ce0c767 to 6d90225 on April 1, 2025 20:31
tynes (Contributor) commented Apr 1, 2025

The framing of two extremes is a great introduction to this topic

tynes (Contributor) commented Apr 1, 2025

Is there any consideration for the batcher? Did we add any new interop-specific config?

tynes (Contributor) commented Apr 1, 2025

There could be a failure mode of disk growth, given that we have to index a lot of data. The solution seems to be pruning; we will know the depth at which we can prune once we have the final expiry window.

SozinM commented Apr 2, 2025

Another more esoteric option for checks:
When building block N, we validate all cross transactions for block N+1 with a 2-second timeout.
Then, while building block N+1, we would have very fresh transactions that we could include.
This assumes it's possible to validate the batch in under 2 seconds, so the results would arrive in time for the next block-building round.
This would load the supervisor, since it would run a lot of checks every 2 seconds.
To improve this a bit, we could validate only the number of transactions that would most likely be included in the block (if we have 100k txs in the mempool, it's obvious that we won't include them all in the next block).
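
For concreteness, here is a minimal Go sketch of this pre-validation idea. All types and names are illustrative (this is not the actual op-node or supervisor API); it only expresses the timing: while block N is being built, candidate cross transactions for block N+1 are checked under a 2-second budget.

```go
// Package prevalidate sketches pre-validating cross txs one block ahead.
package prevalidate

import (
	"context"
	"time"
)

// Tx stands in for a pooled transaction.
type Tx struct{ Hash string }

// CheckFunc stands in for a batched supervisor validity check.
type CheckFunc func(ctx context.Context, txs []Tx) ([]Tx, error)

// PreValidator caches cross txs that passed the previous round's check.
type PreValidator struct {
	check CheckFunc
	fresh []Tx // validated while the previous block was being built
}

// OnBuildBlock is invoked when block N starts building: it validates
// candidates for block N+1 with a 2s deadline so the results are fresh
// for the next round, without delaying the current block's construction.
func (p *PreValidator) OnBuildBlock(candidates []Tx) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	valid, err := p.check(ctx, candidates)
	if err != nil {
		p.fresh = nil // on timeout/error, include no cross txs next round
		return
	}
	p.fresh = valid
}

// FreshTxs returns the pre-validated set for inclusion in block N+1.
func (p *PreValidator) FreshTxs() []Tx { return p.fresh }
```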

axelKingsley (Contributor, Author)

Another more esoteric option for checks: When building block N, we validate all cross transactions for block N+1 with a 2-second timeout. Then, while building block N+1, we would have very fresh transactions that we could include. This assumes it's possible to validate the batch in under 2 seconds, so the results would arrive in time for the next block-building round. This would load the supervisor, since it would run a lot of checks every 2 seconds. To improve this a bit, we could validate only the number of transactions that would most likely be included in the block (if we have 100k txs in the mempool, it's obvious that we won't include them all in the next block).

@SozinM thanks for this idea! I think this is functionally equivalent to our decision to batch-evaluate these messages on a recurring timer. If the timer happened to match the block-building cadence, you'd achieve the same effect, where transactions get evaluated in anticipation of the next block. Hooking it directly to the block number sounds neat, but I wouldn't want to put validation in the way of our block-building timing.

And I totally agree regarding validating only one block's worth in advance. The nice thing is that we can statically determine whether a transaction is interop or not, so if there are 100k txs in the pool, we only need to look over them once to identify the interop ones and then batch-check those (which would be a small subset of all pending txs).
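
For illustration, a minimal Go sketch of that scan-once-then-batch-check flow. The helper names are hypothetical; the static classification shown here assumes executing messages can be recognized by an access-list entry referencing the CrossL2Inbox predeploy, per the interop specs (check the specs for the authoritative rule).

```go
// Package interopfilter sketches static interop-tx classification.
package interopfilter

import "context"

// crossL2Inbox is the interop predeploy referenced by executing messages
// (address shown for illustration; see the interop specs).
const crossL2Inbox = "0x4200000000000000000000000000000000000022"

// Tx is a simplified pooled transaction with its access-list addresses.
type Tx struct {
	Hash       string
	AccessList []string
}

// isInterop statically classifies a tx without any supervisor call.
func isInterop(tx Tx) bool {
	for _, addr := range tx.AccessList {
		if addr == crossL2Inbox {
			return true
		}
	}
	return false
}

// FilterAndCheck walks the pool once, collects only the interop subset,
// and sends that (typically small) batch to a single validity check.
func FilterAndCheck(ctx context.Context, pool []Tx,
	check func(context.Context, []Tx) ([]Tx, error)) ([]Tx, error) {
	var interop []Tx
	for _, tx := range pool {
		if isInterop(tx) {
			interop = append(interop, tx)
		}
	}
	return check(ctx, interop)
}
```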

axelKingsley marked this pull request as ready for review on April 4, 2025 19:57
axelKingsley (Contributor, Author)

Takeaways from review meeting:

  • We are comfortable disabling interop as needed, when processing interop leads to negative externalities for the rest of the chain
    • Expiring messages are the largest concern
    • Kelvin points out that you can simply have the app layer re-emit expired messages
  • One big risk we'd like to track and understand is a chaotic cycle of sequencers initiating reorgs
    • The "smarter" a sequencer gets about avoiding invalid blocks, the more reorgs it creates
    • If there were too many reorgs, chains could stomp on each other and cause widespread issues
    • To mitigate this, we want extensive testing of the emergent behaviors, using dev networks with test sequencers.

axelKingsley (Contributor, Author)

Is there any consideration for the batcher? Did we add any new interop-specific config?

Not really, or at least not with respect to transaction ingress. We've identified that the batcher should likely avoid publishing content beyond the cross-unsafe head, which reduces the risk of a remote-chain invalidation affecting the batch. That was called out in the Supervisor FMA and is now mentioned in some related task lists: ethereum-optimism/optimism#15175
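
A minimal sketch of that batcher guard, under assumed names (this is not actual op-batcher config; it only expresses the cap-at-cross-unsafe idea):

```go
// Package batcherguard sketches capping publication at cross-unsafe.
package batcherguard

// PublishLimit returns the highest block number the batcher should
// publish. localUnsafe is the chain's own unsafe head; crossUnsafe is
// the highest block whose cross-chain dependencies have been verified,
// so nothing beyond it can be invalidated by a remote chain.
func PublishLimit(localUnsafe, crossUnsafe uint64) uint64 {
	if crossUnsafe < localUnsafe {
		return crossUnsafe
	}
	return localUnsafe
}
```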

axelKingsley (Contributor, Author)

There could be a failure mode of disk growth, given that we have to index a lot of data. The solution seems to be pruning; we will know the depth at which we can prune once we have the final expiry window.

Even with fairly aggressive estimates, a cloud host would not be strained by the Supervisor's disk usage: ethereum-optimism/optimism#14919
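
A minimal sketch of the pruning approach, with a hypothetical store interface and the expiry window left as a parameter until the final value is set:

```go
// Package prune sketches expiry-window-based pruning of indexed data.
package prune

import (
	"context"
	"time"
)

// MessageStore abstracts the supervisor's log index.
type MessageStore interface {
	// PruneBefore deletes indexed entries older than the cutoff time.
	PruneBefore(ctx context.Context, cutoff time.Time) error
}

// Run deletes data older than the expiry window on a fixed interval.
func Run(ctx context.Context, store MessageStore, expiryWindow, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			cutoff := time.Now().Add(-expiryWindow)
			_ = store.PruneBefore(ctx, cutoff) // a real impl would log errors
		}
	}
}
```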

Base automatically changed from fma-supervisor to main on April 15, 2025 16:09